
    Colour alignment for relative colour constancy via non-standard references

    Relative colour constancy is an essential requirement for many scientific imaging applications. However, most digital cameras differ in their image formation, and native sensor output is usually inaccessible, e.g., in smartphone camera applications. This makes it hard to achieve consistent colour assessment across a range of devices, which undermines the performance of computer vision algorithms. To resolve this issue, we propose a colour alignment model that treats the camera image formation as a black box and formulates colour alignment as a three-step process: camera response calibration, response linearisation, and colour matching. The proposed model works with non-standard colour references, i.e., colour patches whose true colour values are unknown, by utilising a novel balance-of-linear-distances feature. This is equivalent to determining the camera parameters through an unsupervised process. It also works with a minimal number of corresponding colour patches across the images to be colour aligned, making the processing broadly applicable. Two challenging image datasets collected by multiple cameras under various illumination and exposure conditions were used to evaluate the model. Performance benchmarks demonstrated that our model achieved superior performance compared to other popular and state-of-the-art methods.
    Comment: 14 pages, 8 figures, 2 tables; accepted by IEEE Transactions on Image Processing.
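    The three-step pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' actual formulation: the simple power-law response model, the fixed gamma, and all function names are assumptions introduced here, and the balance-of-linear-distances feature is omitted.

    ```python
    import numpy as np

    def linearise(rgb, gamma=2.2):
        """Step 2 (response linearisation): undo an assumed
        power-law camera response curve."""
        return np.clip(rgb, 0.0, 1.0) ** gamma

    def fit_colour_matrix(src_patches, ref_patches):
        """Step 3 (colour matching): least-squares 3x3 matrix
        mapping source patch colours onto reference patch colours."""
        M, *_ = np.linalg.lstsq(src_patches, ref_patches, rcond=None)
        return M

    def align(src_patches, ref_patches, gamma=2.2):
        """Linearise both sets of corresponding patches, then fit
        a linear colour transform between them."""
        src_lin = linearise(src_patches, gamma)
        ref_lin = linearise(ref_patches, gamma)
        return fit_colour_matrix(src_lin, ref_lin)

    # Toy usage: identical patch sets should yield ~identity matrix.
    patches = np.random.default_rng(0).random((24, 3))
    M = align(patches, patches)
    print(np.allclose(M, np.eye(3), atol=1e-6))  # True
    ```

    In practice the calibration step (step 1) would estimate the response curve itself from the non-standard references rather than assume a fixed gamma.
    
    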

    Robust extraction of text from camera images using colour and spatial information simultaneously

    The importance and use of text extraction from camera-based coloured scene images is rapidly increasing. Text within a camera-grabbed image can contain a large amount of metadata about that scene, which can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, detection of coloured scene text remains a challenge for camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of text features such as colour, font, size and orientation, as well as of the location of probable text regions. In this paper, we document the development of a fully automatic and highly robust text segmentation technique that can be used for any type of camera-grabbed frame, be it a single image or video. A new algorithm is proposed which can overcome the current problems of text segmentation. The algorithm exploits text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera-based images, it was found to outperform existing techniques. The proposed technique also overcomes problems that can arise due to an unconstrained complex background. The novelty of the work arises from the fact that this is the first time that colour and spatial information are used simultaneously for the purpose of text extraction.
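    The combination of colour and spatial information can be illustrated with a toy sketch: group pixels by quantised colour, then keep only colour groups whose pixels are spatially compact (text strokes cluster tightly, while background colours spread across the frame). The quantisation scheme and the compactness threshold below are assumptions for illustration, not the paper's algorithm.

    ```python
    import numpy as np

    def colour_groups(img, levels=4):
        """Quantise RGB so similarly coloured pixels share a label."""
        q = (img * (levels - 1)).round().astype(int)
        return q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]

    def compact_groups(labels, max_spread=10.0):
        """Keep colour groups whose pixel coordinates have low
        spatial spread (a crude 'text-like' spatial cue)."""
        kept = []
        for g in np.unique(labels):
            ys, xs = np.nonzero(labels == g)
            spread = np.sqrt(ys.var() + xs.var())
            if spread < max_spread:
                kept.append(g)
        return kept

    # Toy image: a compact bright "glyph" on a dark background.
    img = np.zeros((32, 32, 3))
    img[12:20, 12:20] = 1.0  # compact white block (text-like)
    labels = colour_groups(img)
    kept = compact_groups(labels)
    print(len(kept))  # 1: only the compact white group survives
    ```

    A real system would replace the spread test with connected-component analysis and stroke-level features, but the principle of filtering colour layers by spatial statistics is the same.
    
    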

    FPGA-Based Processor Acceleration for Image Processing Applications

    FPGA-based embedded image processing systems offer considerable computing resources but present programming challenges compared to software systems. The paper describes an approach based on an FPGA-based soft processor called Image Processing Processor (IPPro), which can operate at up to 337 MHz on a high-end Xilinx FPGA family, and gives details of the dataflow-based programming environment. The approach is demonstrated for a k-means clustering operation and a traffic sign recognition application, both of which have been prototyped on an Avnet Zedboard featuring a Xilinx Zynq-7000 system-on-chip (SoC). A number of parallel dataflow mapping options were explored, giving a speed-up of 8 times for k-means clustering using 16 IPPro cores, and a speed-up of 9.6 times for the morphology filter operation of the traffic sign recognition using 16 IPPro cores, compared to their equivalent ARM-based software implementations. We show that for k-means clustering, the 16-IPPro-core implementation is 57, 28 and 1.7 times more power efficient (fps/W) than the ARM Cortex-A7 CPU, NVIDIA GeForce GTX980 GPU and ARM Mali-T628 embedded GPU respectively.
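    For reference, the k-means clustering operation that the paper maps onto IPPro cores is sketched below in plain NumPy. This is a generic textbook implementation, not the paper's dataflow mapping; the farthest-point initialisation is an assumption added to keep the toy example deterministic. The per-point assignment step is the part that parallelises naturally across cores.

    ```python
    import numpy as np

    def kmeans(points, k, iters=10):
        # Farthest-point initialisation avoids duplicate centres.
        centres = [points[0]]
        for _ in range(k - 1):
            d = np.min([np.linalg.norm(points - c, axis=1)
                        for c in centres], axis=0)
            centres.append(points[d.argmax()])
        centres = np.array(centres)
        for _ in range(iters):
            # Assignment: nearest centre per point (parallelisable).
            d = np.linalg.norm(points[:, None] - centres[None], axis=2)
            labels = d.argmin(axis=1)
            # Update: recompute each centre as its cluster mean.
            for c in range(k):
                if (labels == c).any():
                    centres[c] = points[labels == c].mean(axis=0)
        return labels, centres

    # Toy data: two well-separated blobs.
    pts = np.vstack([np.zeros((20, 2)), np.full((20, 2), 10.0)])
    labels, centres = kmeans(pts, 2)
    print(labels[0] != labels[20])  # True: blobs land in separate clusters
    ```
    
    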

    Spectral Illumination Correction: Achieving Relative Color Constancy Under the Spectral Domain

    Achieving color constancy between and within images, i.e., minimizing the color difference between the same object imaged under nonuniform and varied illuminations, is crucial for computer vision tasks such as colorimetric analysis and object recognition. Most current methods attempt to solve this by illumination correction in perceptual color spaces. In this paper, we propose two pixel-wise algorithms that achieve relative color constancy by working in the spectral domain. That is, the proposed algorithms map each pixel to the reflectance ratio of objects appearing in the scene and perform illumination correction in this spectral domain. We also propose a camera calibration technique that calculates the characteristics of a camera without the need for a standard reference. We show that the proposed algorithms achieve the best performance on nonuniform illumination correction and relative illumination matching, respectively, compared to the benchmarked algorithms.
    This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 720325, FoodSmartphone.
    Peer-reviewed post-print. 2018 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Louisville, KY, USA.
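    The reflectance-ratio idea behind the abstract can be illustrated with a minimal sketch. Under a simple multiplicative model I(x) = L · R(x) with spatially uniform illumination L, the ratio of each pixel to a reference pixel cancels the illumination term exactly. This toy handles only the uniform case; the paper's pixel-wise algorithms address nonuniform illumination, and the reference-pixel choice here is an assumption.

    ```python
    import numpy as np

    def reflectance_ratio(img, ref_pixel):
        """Map each pixel to its reflectance ratio relative to a
        reference pixel; under uniform illumination the illumination
        term cancels, leaving an illumination-invariant quantity."""
        return img / img[ref_pixel]

    # Same reflectance pattern under two global illumination levels.
    R = np.random.default_rng(1).random((8, 8)) + 0.1
    img_dim, img_bright = 0.5 * R, 2.0 * R
    ra = reflectance_ratio(img_dim, (0, 0))
    rb = reflectance_ratio(img_bright, (0, 0))
    print(np.allclose(ra, rb))  # True: the ratios match despite a 4x exposure gap
    ```
    
    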

    Working together: reflections on a non-hierarchical approach to collaborative writing

    The process of writing is a cornerstone of academia, reflecting values such as rigour, critique and engagement (Mountz et al., 2015). Academic writing is typically valorized as an individual endeavour, but with the advancement of technologies such as synchronous online writing platforms, opportunities to construct scholarly knowledge collaboratively have multiplied (Nykopp et al., 2019). Collaborative writing (CW) involves 'sharing the responsibility for and the ownership of the entire text produced' (Storch, 2019, 40), factors that have certainly been enhanced by developing technologies. CW differs from cooperative writing, which involves a division of labour with each individual being assigned to, or completing, a discrete sub-task (Storch, 2019). This chapter discusses the reflections of ten authors from a UK-based research virtual Community of Practice (vCoP) on the challenges and benefits encountered during the CW of a research journal article using a shared Google Document.